Model selection for neural networks: comparing MDL and NIC
Authors
Abstract
We compare the MDL and NIC methods for determining the correct size of a feedforward neural network. The NIC method has to be adapted for this kind of network. We include an experiment based on a small standard problem.
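The size-selection idea in the abstract can be illustrated with a minimal, hypothetical sketch: score each candidate hidden-layer size by its training loss plus an NIC/AIC-style complexity penalty (parameter count divided by sample size) and keep the minimizer. The loss values below are invented for illustration and are not from the paper's experiment.

```python
def param_count(n_in, n_hidden, n_out):
    # weights + biases of a one-hidden-layer feedforward network
    return n_hidden * (n_in + 1) + n_out * (n_hidden + 1)

def ic_score(train_loss, k, n):
    # information-criterion-style score: empirical loss plus a
    # complexity penalty of k parameters over n training samples
    # (a simplified stand-in for the NIC penalty term)
    return train_loss + k / n

# hypothetical training losses for hidden sizes 2..6 on n = 100 samples
losses = {2: 0.40, 3: 0.21, 4: 0.15, 5: 0.14, 6: 0.135}
n = 100

scores = {h: ic_score(loss, param_count(1, h, 1), n)
          for h, loss in losses.items()}
best_size = min(scores, key=scores.get)  # smallest penalized score wins
```

Under these made-up numbers the criterion favors four hidden units: the larger nets fit slightly better, but not enough to pay for their extra parameters.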
Similar resources
Statistical Analysis of Regularization Constant - From Bayes, MDL and NIC Points of View
In order to avoid overfitting in neural learning, a regularization term is added to the loss function to be minimized. It is naturally derived from the Bayesian standpoint. The present paper studies how to determine the regularization constant from the points of view of the empirical Bayes approach, the minimum description length (MDL) approach, and the network information criterion (NIC) approa...
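As a concrete (assumed) form of the penalized objective this abstract refers to, weight decay adds a term proportional to the squared weight norm; the Bayes, MDL, and NIC analyses it mentions all concern how to pick the constant in front of that term. A toy sketch with invented numbers:

```python
def regularized_loss(data_loss, weights, lam):
    # penalized objective: empirical loss + lam * ||w||^2,
    # where lam is the regularization constant the abstract's
    # methods aim to determine from data
    penalty = lam * sum(w * w for w in weights)
    return data_loss + penalty

# purely illustrative: data loss 0.2, weights [0.5, -1.0], lam = 0.1
total = regularized_loss(0.2, [0.5, -1.0], 0.1)
```

Here the penalty is 0.1 * (0.25 + 1.0) = 0.125, so the penalized loss is 0.325; a larger lam shrinks the weights harder at the cost of a worse data fit.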
Efficient Parameters Selection for CNTFET Modelling Using Artificial Neural Networks
In this article, different types of artificial neural networks (ANN) were used for CNTFET (carbon nanotube field-effect transistor) simulation. CNTFET is one of the most likely alternatives to silicon transistors due to its excellent electronic properties. In determining the accurate output drain current of the CNTFET, the elapsed time and accuracy of different simulation methods were compared. The training data for...
An artificial intelligence model based on LS-SVM for third-party logistics provider selection
The use of third-party logistics (3PL) providers is regarded as a new strategy in logistics management. 3PL relationships are sometimes more complicated than classical logistics supplier relationships. They are regarded as a well-known way to increase organizations' flexibility in responding to rapidly changing and uncertain market conditions, to focus on core competenc...
Novel Radial Basis Function Neural Networks based on Probabilistic Evolutionary and Gaussian Mixture Model for Satellites Optimum Selection
In this study, two novel learning algorithms have been applied to the Radial Basis Function Neural Network (RBFNN) to approximate highly non-linear functions. The Probabilistic Evolutionary (PE) and Gaussian Mixture Model (GMM) techniques are proposed to significantly reduce the error functions. The main idea concerns various strategies for optimizing the procedure of Gradient ...
Pruning with Minimum Description Length
The number of parameters in a model and its ability to generalize on the underlying data-generating machinery are tightly coupled entities. Neural networks usually consist of a large number of parameters, and pruning (the process of setting single parameters to zero) has been used to reduce the net's complexity in order to increase its generalization ability. Another less obvious approach is to ...
Journal title:
Volume, issue
Pages -
Publication date: 1994